
    Computing Weakest Strategies for Safety Games of Imperfect Information

    CEDAR (Counter Example Driven Antichain Refinement) is a new symbolic algorithm for computing weakest strategies for safety games of imperfect information. The algorithm computes a fixed point over the lattice of contravariant antichains. Here contravariant antichains are antichains over pairs consisting of an information set and an allow set representing the associated move. We demonstrate how the richer structure of contravariant antichains for representing antitone functions, as opposed to standard antichains for representing sets of downward-closed sets, allows CEDAR to apply a significantly less complex controllable predecessor step than previous algorithms.
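    The abstract's central data structure can be illustrated with a toy sketch. The pair ordering, insertion routine, and induced function below are simplifying assumptions chosen for illustration, not the paper's exact definitions:

```python
from typing import FrozenSet, List, Tuple

# A pair (information set, allow set); both modeled as frozensets of ints.
Pair = Tuple[FrozenSet[int], FrozenSet[int]]

def dominated(p: Pair, q: Pair) -> bool:
    """p is redundant given q: q covers at least as large an information
    set while allowing at least as many moves."""
    return p[0] <= q[0] and p[1] <= q[1]

def insert(antichain: List[Pair], p: Pair) -> List[Pair]:
    """Insert p, keeping only non-dominated (maximal) pairs."""
    if any(dominated(p, q) for q in antichain):
        return antichain
    return [q for q in antichain if not dominated(q, p)] + [p]

def allowed(antichain: List[Pair], info: FrozenSet[int]) -> FrozenSet[int]:
    """Moves allowed at information set `info`: the union of allow sets of
    pairs whose information set covers it. Shrinking `info` can only add
    covering pairs, so the induced function is antitone."""
    moves: FrozenSet[int] = frozenset()
    for i_set, a_set in antichain:
        if info <= i_set:
            moves |= a_set
    return moves
```

    For example, after inserting the pairs ({1}, {10, 11}) and ({1, 2}, {10}), both are kept: the smaller information set allows moves {10, 11} while the larger allows only {10}, i.e. the represented function is antitone.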

    The Celestial Walk: An Oblivious Walk in Convex Subdivisions with Guaranteed Termination

    We present a new oblivious walking strategy for convex subdivisions; oblivious means that the next cell visited does not depend on the cells visited previously. Our walk is faster than the straight walk and more generally applicable than the visibility walk. To prove termination of our walk we use a novel monotonically decreasing distance measure.

    Celestial Walk: A Terminating, Memoryless Walk for Convex Subdivisions

    A common solution for routing messages or performing point location in planar subdivisions consists in walking from one face to another using neighboring relationships. If the next face does not depend on the previously visited faces, the walk is called memoryless. We present a new memoryless strategy for convex subdivisions. The known alternatives are the straight walk, which is slightly slower and not memoryless, and the visibility walk, which is guaranteed to work properly only for Delaunay triangulations. We prove termination of our walk using a novel distance measure that, for our proposed walking strategy, is strictly monotonically decreasing.
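    The celestial step rule and its distance measure are not reproduced here; as a hedged illustration of the general walking framework on a triangulated subdivision, the sketch below implements a plain visibility-style walk with orientation tests (the triangle data, neighbor layout, and step rule are assumptions of this sketch; unlike the celestial rule, the visibility walk is only guaranteed to terminate on Delaunay triangulations):

```python
def orient(a, b, p):
    """Sign of the cross product (b-a) x (p-a): > 0 iff p is strictly
    left of the directed line a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def walk(tris, neighbors, pts, start, q):
    """Memoryless walk: from the current triangle (vertices listed CCW),
    cross an edge that has q strictly on its right side; stop when no
    such edge exists, i.e. q lies inside the current triangle."""
    t = start
    while True:
        a, b, c = (pts[i] for i in tris[t])
        for k, (u, v) in enumerate(((a, b), (b, c), (c, a))):
            if orient(u, v, q) < 0 and neighbors[t][k] is not None:
                t = neighbors[t][k]
                break
        else:
            return t  # no separating edge: q is in (or on) triangle t

# Two CCW triangles sharing the edge (1, 2):
pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0), (3.0, 2.0)]
tris = [(0, 1, 2), (1, 3, 2)]
neighbors = [[None, 1, None],   # tri 0: edge (1, 2) borders tri 1
             [None, None, 0]]   # tri 1: edge (2, 1) borders tri 0
```

    Starting in triangle 0 with query point (2.0, 1.2), the walk crosses the shared edge once and reports triangle 1.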

    Compositional Synthesis of Safety Controllers

    In my thesis I investigate compositional techniques for the synthesis of safety controllers. A safety controller, in this context, is a state machine that gives the set of safe control outputs for every possible sequence of observations from the plant under control. Compositionality, in this context, refers to the ability to compose the plant model with a safety controller that is derived in a local context, meaning we only consider a selected subset of the full set of plant model components. The main research question addressed in the thesis is how compositional techniques can have a beneficial effect on scalability. Here scalability refers to the way the running time and memory requirements of the synthesis algorithm increase with the number of plant model components. The working hypothesis was that compositionality should indeed have a beneficial impact on scalability. The intuition behind this is that using compositional techniques we should be able to avoid, or at least partly alleviate, the kind of state explosion that is typically seen when synthesizing controllers for larger plant models consisting of the parallel composition of multiple plant model components. The experimental results presented in the thesis are positive in the sense that they support the working hypothesis. On a natural example, the compositional algorithm exhibits linear scaling behavior whereas the monolithic (non-compositional) algorithm exhibits super-exponential scaling behavior. We see this even for an example that intrinsically requires a combination of local control constraints and a global control constraint, where each local constraint depends on a small number of adjacent plant components, whereas the global constraint intrinsically depends on all plant model components simultaneously.
A first main contribution is a symbolic algorithm that works directly on a compact symbolic representation of the controller, thereby avoiding explicit construction of the underlying state graph. The algorithm works by refining the representation of the control strategy in a counterexample-driven manner. Upon termination the algorithm yields a symbolic representation of the most permissive safe control strategy for the given plant model. The algorithm is specifically designed for models that feature partial observability, meaning that certain internal state of the plant model is not directly observable by the controller. A second main contribution is a compositional technique that also explicitly takes partial observability into account. For this we develop a compositional algorithm that invokes the aforementioned strategy refinement algorithm repeatedly. In particular, the compositional algorithm performs a two-step synthesis process for each relevant subset of the plant model: (1) computation of the local context, which effectively forms a local overapproximation of the allowable behavior, and (2) computation of the local controller, which effectively forms a local underapproximation of the deniable behavior. We prove that upon termination of the algorithm the context and controller signatures coincide and we obtain precisely the desired most permissive safety controller, yet constructed in an incremental, compositional fashion. What sets these contributions apart from other work in the field is that I consider compositionality in combination with partial observability, and that the resulting compositional algorithm does not rely on any kind of explicit, congruence-based state minimization procedure.
Even though the two aforementioned main contributions can be considered separately, it may be more informative to view them in combination: it is the compositional algorithm that manages to exploit to the maximal extent the symbolic strategy refinement algorithm that underlies it, or, vice versa, it is the symbolic strategy refinement algorithm that enables the compositional algorithm that relies on it to scale well on larger problem instances.
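The thesis itself treats partial observability and compositional synthesis; as a much simplified, perfect-information illustration of the fixed-point core of safety synthesis (the function names and the toy plant below are assumptions of this sketch, not the thesis's algorithm), a most permissive safe strategy can be computed by iterating a controllable predecessor:

```python
def safe_region(states, actions, trans, unsafe):
    """Greatest fixed point: keep a state iff some action has at least
    one successor and every possible successor (the environment's
    choice) stays inside the current candidate region."""
    region = set(states) - set(unsafe)
    while True:
        shrunk = {s for s in region
                  if any(trans.get((s, a)) and trans[(s, a)] <= region
                         for a in actions)}
        if shrunk == region:
            return region
        region = shrunk

def permissive_strategy(region, actions, trans):
    """Most permissive safe strategy: at each safe state, every action
    whose successors all stay in the safe region."""
    return {s: {a for a in actions
                if trans.get((s, a)) and trans[(s, a)] <= region}
            for s in region}

# Toy plant: values of `trans` are the environment's possible resolutions.
states = [0, 1, 2, 3]
actions = ["a", "b"]
trans = {(0, "a"): {1}, (0, "b"): {2, 3},
         (1, "a"): {0},
         (2, "a"): {3}, (2, "b"): {0, 3}}
```

With state 3 unsafe, state 2 is pruned in the first iteration (both of its actions may reach 3), leaving the safe region {0, 1} with action "a" as the only safe choice at each state.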